Range query algorithm for large scale moving objects in distributed environment
MA Yongqiang, CHEN Xiaomeng, YU Ziqiang
Journal of Computer Applications    2023, 43 (1): 111-121.   DOI: 10.11772/j.issn.1001-9081.2021101853
Continuous range queries over moving objects are essential to many location-based services. To address this issue, a distributed search method was proposed for processing concurrent range queries over large-scale moving objects. Firstly, a Distributed Dynamic Index (DDI) structure was proposed, formed by a Global Grid Index (GGI) and a local elastic quadtree. Then, a Distributed Search Algorithm (DSA) was proposed based on the DDI structure. DSA first introduced an incremental strategy for updating query results as objects and query points continuously change their locations. Within this incremental update process, a shared computing optimization strategy for multiple concurrent queries was then introduced, so that the range query results of moving objects could be searched incrementally from existing computation results. Finally, three moving object datasets with different spatial distributions were simulated on the basis of the German road network, and the proposed algorithm was compared with NS (Naive Search), GI (Grid Index) and DHI (Distributed Hybrid Index). The results show that, compared with DHI, the best-performing comparison algorithm, DSA decreases the initial query time by 22.7% and the incremental query time by 15.2%, verifying that DSA is superior to the comparison algorithms.
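The grid-index ingredient of such systems can be illustrated with a toy single-machine sketch (not the paper's DDI/GGI implementation; the class name, cell size and API are illustrative): objects are hashed into square cells, moving objects re-hash on update, and a range query only inspects the cells overlapping the query circle.

```python
import math
from collections import defaultdict

class GridIndex:
    """Toy uniform grid index for range queries over moving 2D objects."""

    def __init__(self, cell=1.0):
        self.cell = cell
        self.cells = defaultdict(dict)   # (cx, cy) -> {obj_id: (x, y)}
        self.where = {}                  # obj_id -> current cell key

    def _key(self, x, y):
        return (int(math.floor(x / self.cell)), int(math.floor(y / self.cell)))

    def update(self, oid, x, y):
        """Insert or move an object; moving objects re-hash on update."""
        old = self.where.get(oid)
        key = self._key(x, y)
        if old is not None and old != key:
            del self.cells[old][oid]
        self.cells[key][oid] = (x, y)
        self.where[oid] = key

    def range_query(self, qx, qy, r):
        """Return ids of objects within distance r of (qx, qy)."""
        x0, y0 = self._key(qx - r, qy - r)
        x1, y1 = self._key(qx + r, qy + r)
        hits = []
        for cx in range(x0, x1 + 1):
            for cy in range(y0, y1 + 1):
                for oid, (x, y) in self.cells.get((cx, cy), {}).items():
                    if (x - qx) ** 2 + (y - qy) ** 2 <= r * r:
                        hits.append(oid)
        return hits
```

In the paper's setting this cell directory is global and the per-cell contents live in distributed local indexes; the pruning principle (only visit cells intersecting the query region) is the same.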
Fast calibration algorithm in surgical navigation system based on augmented reality
SUN Qichang, MAI Yongfeng, CHEN Xiaojun
Journal of Computer Applications    2021, 41 (3): 833-838.   DOI: 10.11772/j.issn.1001-9081.2020060776
To address the fusion of virtual and real scenes for Optical See-Through Head-Mounted Displays (OST-HMD) in Augmented Reality (AR), a fast calibration method for OST-HMD was proposed on the basis of an optical positioning and tracking system. Firstly, the virtual markers in the OST-HMD and the corresponding points in the real world were collected as two 3D point sets, and the transformation between the virtual space and the optical positioning and tracking space was estimated to obtain the transformation matrix from the virtual space to the real scene. Then, the transitive relation among the matrices of the entire navigation system was built, an AR-based surgical navigation system was designed and implemented on this basis, and accuracy validation and model experiments were conducted on this system. Experimental results show that the proposed algorithm achieves a root mean square error of 1.39±0.49 mm between the virtual datum points and the corresponding real datum points, with an average calibration time of 23.8 s, which demonstrates the algorithm's potential for clinical applications.
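Estimating a transformation between two paired 3D point sets is classically done with the SVD-based (Kabsch/Umeyama) rigid registration; a minimal sketch of that standard technique, not the authors' exact implementation, is:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t with R @ P[i] + t ≈ Q[i],
    via the standard SVD (Kabsch) solution.

    P, Q: (N, 3) arrays of corresponding 3D points, N >= 3, not collinear."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)        # centroids
    H = (P - cP).T @ (Q - cQ)                      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # reflection guard
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

# Synthetic check: recover a known rigid motion and report the RMS error.
rng = np.random.default_rng(0)
P = rng.random((10, 3))
a = 0.3
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true
R, t = rigid_transform(P, Q)
rmse = np.sqrt(np.mean(np.sum((P @ R.T + t - Q) ** 2, axis=1)))
```

On real tracking data the residual RMS error plays the role of the reported 1.39±0.49 mm datum-point error.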
Robust texture representation by combining differential feature and Haar wavelet decomposition
LIU Wanghua, LIU Guangshuai, CHEN Xiaowen, LI Xurui
Journal of Computer Applications    2020, 40 (9): 2728-2736.   DOI: 10.11772/j.issn.1001-9081.2020010032
Aiming at the problems that traditional local binary pattern operators lack deep-level correlation information between pixels and are poorly robust to the blurring and rotation changes common in images, a robust texture representation operator combining differential features and Haar wavelet decomposition was proposed. In the differential feature channel, the first-order and second-order differential features of the image were extracted with isotropic differential operators, so that the differential features were essentially invariant to rotation and robust to image blur. In the wavelet decomposition feature extraction channel, exploiting the good simultaneous localization of the wavelet transform in the time and frequency domains, multi-scale two-dimensional Haar wavelet decomposition was used to extract blur-robust features. Finally, the feature histograms of the two channels were concatenated to construct the texture description of the image. In feature discrimination experiments, the accuracy of the proposed operator on the complex UMD, UIUC and KTH-TIPS texture databases reaches 98.86%, 98.2% and 99.05% respectively, exceeding that of the MRELBP (Median Robust Extended Local Binary Pattern) operator by 0.26%, 1.32% and 1.12% respectively. In robustness experiments on rotation change and image blurring, the classification accuracy of the proposed operator on the TC10 texture database, which contains only rotation changes, reaches 99.87%, and its accuracy drop on the TC11 texture database with different levels of Gaussian blur is only 6%. In computational complexity experiments, the feature dimension of the proposed operator is only 324, and its average feature extraction time on the TC10 texture database is 30.9 ms.
Experimental results show that the method combining differential features and Haar wavelet decomposition has strong feature discriminability, strong robustness to rotation and blurring, and low computational complexity, making it well suited to situations with small databases.
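One level of the 2D Haar decomposition used in the wavelet channel can be sketched directly from 2×2 pixel blocks (a minimal pure-Python illustration; sub-band naming conventions vary between texts):

```python
def haar2d(img):
    """One level of 2D Haar decomposition.

    img: 2D list of floats with even dimensions.
    Returns (LL, LH, HL, HH) sub-bands of half the size: approximation,
    horizontal detail, vertical detail and diagonal detail."""
    h, w = len(img), len(img[0])
    LL = [[0.0] * (w // 2) for _ in range(h // 2)]
    LH = [[0.0] * (w // 2) for _ in range(h // 2)]
    HL = [[0.0] * (w // 2) for _ in range(h // 2)]
    HH = [[0.0] * (w // 2) for _ in range(h // 2)]
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            LL[i // 2][j // 2] = (a + b + c + d) / 4.0   # local average
            LH[i // 2][j // 2] = (a - b + c - d) / 4.0   # left-right contrast
            HL[i // 2][j // 2] = (a + b - c - d) / 4.0   # top-bottom contrast
            HH[i // 2][j // 2] = (a - b - c + d) / 4.0   # diagonal contrast
    return LL, LH, HL, HH
```

Applying this recursively to the LL band yields the multi-scale decomposition; the low-pass averaging in LL is what makes the extracted features tolerant to mild blurring.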
Intent recognition dataset for dialogue systems in power business
LIAO Shenglan, YIN Shi, CHEN Xiaoping, ZHANG Bo, OUYANG Yu, ZHANG Heng
Journal of Computer Applications    2020, 40 (9): 2549-2554.   DOI: 10.11772/j.issn.1001-9081.2020010119
For the intelligent dialogue system of customer service robots in power supply business halls, a large-scale dataset of power business user intents was constructed. The dataset includes 9 577 user queries and their labeled categories. First, the real voice data collected from power supply business halls were cleaned, processed and filtered. To enable the data to drive the study of deep learning models for intent classification, the data were labeled and augmented with high quality by professionals with background knowledge of the power business. In the labeling process, 35 service category labels were defined according to the power business. To test the practicability and effectiveness of the proposed dataset, several classical intent classification models were evaluated, and the resulting intent classification models were deployed in the dialogue system. The classical Text-RCNN (Text classification with Recurrent Convolutional Neural Network) model achieved 87.1% accuracy on this dataset. Experimental results show that the proposed dataset can effectively drive research on dialogue systems for the power business and improve user satisfaction.
Simulation and effectiveness evaluation of network warfare based on LightGBM algorithm
CHEN Xiaonan, HU Jianmin, CHEN Xi, ZHANG Wei
Journal of Computer Applications    2020, 40 (7): 2003-2008.   DOI: 10.11772/j.issn.1001-9081.2019122129
To address the high degree of abstraction of network warfare and the lack of means for its simulation and effectiveness evaluation under informatized conditions, a network warfare simulation and effectiveness evaluation method integrating multiple indexes of both the attacking and defending sides was proposed. Firstly, for the attacker, four kinds of attack methods against the network were introduced; for the defender, the network node structure, content importance and emergency response capability were introduced as defense indicators. Then, a network warfare effectiveness evaluation model was established by integrating the PageRank algorithm and the fuzzy comprehensive evaluation method into the Light Gradient Boosting Machine (LightGBM) algorithm. Finally, by defining the node damage effectiveness curve, evaluation results for residual effectiveness and damage effectiveness of the whole attack-defense system were obtained. The simulation results show that the model can effectively evaluate the operational effectiveness of both sides of network warfare, verifying the rationality and feasibility of the proposed effectiveness evaluation method.
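The fuzzy comprehensive evaluation ingredient reduces to a weighted combination B = W·R of per-indicator membership degrees over rating grades; a minimal sketch with purely illustrative weights and membership values (the paper's actual indicators and numbers are not reproduced here):

```python
def fuzzy_evaluate(weights, membership):
    """Fuzzy comprehensive evaluation, B = W * R: combine per-indicator
    membership degrees over rating grades into an overall grade vector,
    then normalize. weights[i] weighs indicator i; membership[i][j] is
    indicator i's degree of membership in grade j."""
    grades = len(membership[0])
    b = [sum(w * row[j] for w, row in zip(weights, membership))
         for j in range(grades)]
    total = sum(b)
    return [x / total for x in b]

# Three defense indicators rated over grades (high, medium, low) -- illustrative.
W = [0.5, 0.3, 0.2]
R = [[0.6, 0.3, 0.1],
     [0.2, 0.5, 0.3],
     [0.1, 0.2, 0.7]]
B = fuzzy_evaluate(W, R)
```

The grade with the largest component of B is the overall evaluation result; in the paper this vector is one of the inputs fed, together with PageRank scores, into the LightGBM model.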
Novel bidirectional aggregation degree feature extraction method for patent new word discovery
CHEN Meijie, XIE Zhenping, CHEN Xiaoqi, XU Peng
Journal of Computer Applications    2020, 40 (3): 631-637.   DOI: 10.11772/j.issn.1001-9081.2019071193
Aiming at the poor performance of general new word discovery methods in recognizing long patent words, the low flexibility of part-of-speech collocation templates for patent terminology, and the lack of unsupervised methods for recognizing long Chinese patent words, a novel bidirectional aggregation degree feature extraction method for patent new word discovery was proposed. Firstly, a bidirectional conditional probability was introduced on the statistical information between the first and last words of a two-word term. Secondly, a word boundary filtering rule was further introduced on the basis of this feature. Finally, new patent words were extracted by combining the aggregation degree feature with the word boundary filtering rule. Experimental analysis shows that the new method improves the overall F-score by 6.7 percentage points compared with a general-domain new word discovery method, improves the overall F-score by 19.2 and 17.2 percentage points respectively compared with two recent patent terminology collocation template methods, and significantly increases the F-score for discovering new words of 4 to 8 characters. In summary, the proposed method greatly improves the performance of patent new word discovery and can extract long compound words in patent documents more effectively, while reducing reliance on pre-training and on additional complex rule bases, giving it better practicality.
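The bidirectional conditional probability idea can be sketched from raw unigram and bigram counts: a pair of adjacent units is a strong word-formation candidate only if the attraction holds in both directions. This is a minimal illustration (taking the minimum of the two directions is one plausible aggregation; the paper's exact formula may differ):

```python
from collections import Counter

def bidirectional_aggregation(tokens):
    """For each adjacent pair (w1, w2), compute the forward conditional
    probability P(w2 | w1) and the backward P(w1 | w2), and keep their
    minimum as a bidirectional aggregation score."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    scores = {}
    for (w1, w2), n in bigrams.items():
        forward = n / unigrams[w1]    # P(w2 | w1)
        backward = n / unigrams[w2]   # P(w1 | w2)
        scores[(w1, w2)] = min(forward, backward)
    return scores

tokens = ["neural", "network", "deep", "neural", "network", "neural", "model"]
scores = bidirectional_aggregation(tokens)
```

Here ("neural", "network") scores highest because each member strongly predicts the other, while incidental neighbors score low; thresholding such scores plus boundary filtering yields new-word candidates.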
Spark framework based optimized large-scale spectral clustering parallel algorithm
CUI Yixin, CHEN Xiaodong
Journal of Computer Applications    2020, 40 (1): 168-172.   DOI: 10.11772/j.issn.1001-9081.2019061061
To overcome the performance bottlenecks of spectral clustering on large-scale datasets, such as time-consuming computation and the inability to complete clustering at all, a parallel spectral clustering algorithm suitable for large-scale datasets was proposed based on Spark. Firstly, the similarity matrix was constructed through one-way loop iteration to avoid redundant computation. Then, the construction and normalization of the Laplacian matrix were optimized by position transformation and scalar multiplication replacement in order to reduce storage requirements. Finally, approximate eigenvector calculation was used to further reduce the computational cost. Experimental results on different test datasets show that, as the size of the test dataset increases, the running times of the one-way loop iteration and the approximate eigenvector calculation grow slowly and approximately linearly, the clustering results obtained with approximate eigenvectors are similar to those obtained with exact eigenvectors, and the algorithm scales well to large datasets. While retaining good spectral clustering performance, the improved algorithm increases operating efficiency and effectively alleviates the high computational cost and the failure to cluster.
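The pipeline being parallelized is the standard spectral clustering recipe; a single-machine sketch under illustrative parameter choices (Gaussian similarity, symmetric normalized Laplacian, exact rather than approximate eigenvectors, and a tiny k-means with farthest-point seeding), not the paper's Spark implementation:

```python
import numpy as np

def spectral_clustering(X, k, sigma=1.0, iters=20):
    """Cluster rows of X into k groups via the spectral embedding."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)      # pairwise squared distances
    W = np.exp(-d2 / (2.0 * sigma ** 2))                     # similarity matrix
    np.fill_diagonal(W, 0.0)
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
    L = np.eye(n) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]  # normalized Laplacian
    _, vecs = np.linalg.eigh(L)                              # ascending eigenvalues
    U = vecs[:, :k]                                          # k smallest eigenvectors
    U = U / np.linalg.norm(U, axis=1, keepdims=True)         # row-normalized embedding
    centers = [U[0]]                                         # farthest-point seeding
    for _ in range(1, k):
        gap = ((U[:, None, :] - np.array(centers)[None, :, :]) ** 2).sum(-1).min(axis=1)
        centers.append(U[np.argmax(gap)])
    centers = np.array(centers)
    for _ in range(iters):                                   # plain k-means
        labels = ((U[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(axis=1)
        centers = np.array([U[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels
```

The O(n²) similarity matrix and the dense eigendecomposition here are exactly the steps whose cost motivates the one-way loop construction and approximate eigenvector computation in the paper.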
Link prediction method fusing clustering coefficients
LIU Yuyang, LI Longjie, SHAN Na, CHEN Xiaoyun
Journal of Computer Applications    2020, 40 (1): 28-35.   DOI: 10.11772/j.issn.1001-9081.2019061008
Many link prediction algorithms based on network structure information estimate node similarity, and hence predict links, by using the degree of clustering of nodes. However, these algorithms consider only the clustering coefficient of nodes and ignore the influence of the link clustering coefficients between the predicted nodes and their common neighbors on node similarity. To address this problem, a link prediction algorithm combining the node clustering coefficient and asymmetric link clustering coefficients was proposed. Firstly, the clustering coefficient of each common neighbor was calculated, and the average link clustering coefficient of the predicted node pair was obtained from the two asymmetric link clustering coefficients of each common neighbor. Then, a comprehensive index was obtained by fusing these two clustering coefficients with Dempster-Shafer (DS) theory, and by applying this index to the InterMediate Probability model (IMP), a new node similarity index named IMP_DS was designed. Experimental results on nine networks show that the proposed algorithm achieves better performance than the Common Neighbor (CN), Adamic-Adar (AA) and Resource Allocation (RA) indexes and the InterMediate Probability model based on Common Neighbor (IMP_CN) in terms of Area Under the receiver operating characteristic Curve (AUC) and Precision.
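The two building blocks can be sketched in a few lines of pure Python (the asymmetric link clustering coefficient below follows one common convention; the paper's exact definition and the DS fusion are not reproduced):

```python
def clustering_coefficient(adj, v):
    """Local clustering coefficient of node v: fraction of pairs of v's
    neighbors that are themselves connected.

    adj: dict mapping node -> set of neighbors (undirected, no self-loops)."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
    return 2.0 * links / (k * (k - 1))

def link_clustering_coefficient(adj, u, v):
    """Asymmetric link clustering coefficient of edge (u, v) seen from u:
    the share of u's other neighbors that are also neighbors of v.
    Asymmetric because swapping u and v changes the denominator."""
    if len(adj[u]) <= 1:
        return 0.0
    return len(adj[u] & adj[v]) / (len(adj[u]) - 1)
```

Averaging the two directed values of `link_clustering_coefficient` over the common neighbors of a candidate pair gives the per-pair quantity that the proposed index fuses with the node clustering coefficient.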
Adversarial negative sample generation for knowledge representation learning
ZHANG Zhao, JI Jianmin, CHEN Xiaoping
Journal of Computer Applications    2019, 39 (9): 2489-2493.   DOI: 10.11772/j.issn.1001-9081.2019020357
Knowledge graph embedding embeds the symbolic relations and entities of a knowledge graph into a low-dimensional continuous vector space. Although negative samples are required to train knowledge graph embedding models, most knowledge graphs store only positive examples, in the form of triplets. Moreover, negative samples generated by the conventional negative sampling of knowledge graph embedding methods are easily discriminated by the model and contribute less and less as training goes on. To address this problem, an Adversarial Negative Generator (ANG) model was proposed. The generator applied an encoder-decoder pipeline: the encoder read in, as context information, positive triplets whose head or tail entity had been replaced, and the decoder then filled in the replaced entity of the triplet using the encoding information provided by the encoder, thereby generating negative samples. Several existing knowledge graph embedding models were used to play an adversarial game with the proposed generator to optimize the knowledge representation vectors. Comparison with existing knowledge graph embedding models shows that the proposed method achieves better mean rank in link prediction and more accurate triplet classification results on the FB15K237, WN18 and WN18RR datasets.
Face recognition combining weighted information entropy with enhanced local binary pattern
DING Lianjing, LIU Guangshuai, LI Xurui, CHEN Xiaowen
Journal of Computer Applications    2019, 39 (8): 2210-2216.   DOI: 10.11772/j.issn.1001-9081.2019010181
Because the recognition rate of faces is severely reduced under the influence of illumination, pose, expression, occlusion and noise, a method combining weighted Information Entropy (IEw) with Adaptive-Threshold Ring Local Binary Pattern (ATRLBP), named IEwATR-LBP, was proposed. Firstly, the information entropy of each sub-block of the original face image was extracted to obtain the sub-block's weight IEw. Secondly, the probability histogram of each face sub-block was extracted with the ATRLBP operator. Finally, the final feature histogram of the original face image was obtained by concatenating the products of each IEw with the corresponding probability histogram, and the recognition result was obtained with a Support Vector Machine (SVM). In comparison experiments on the illumination, pose, expression and occlusion subsets of the AR face database, the proposed method achieved recognition rates of 98.37%, 94.17%, 98.20% and 99.34% respectively; it also achieved a maximum recognition rate of 99.85% on the ORL face database. Comparing the average recognition rates over 5 experiments with different numbers of training samples shows that the recognition rate on samples with Gaussian noise was 14.04 percentage points lower than on noise-free samples, while on samples with salt-and-pepper noise it was only 2.95 percentage points lower. Experimental results show that the proposed method can effectively improve face recognition rates under the influence of illumination, pose, occlusion, expression and impulse noise.
Detection method of hard exudates in fundus images by combining local entropy and robust principal components analysis
CHEN Li, CHEN Xiaoyun
Journal of Computer Applications    2019, 39 (7): 2134-2140.   DOI: 10.11772/j.issn.1001-9081.2019010208
To relieve the time-consuming and error-prone diagnosis of fundus images by ophthalmologists, an unsupervised automatic detection method for hard exudates in fundus images was proposed. Firstly, blood vessels, dark lesion regions and the optic disc were removed by morphological background estimation in a preprocessing phase. Then, taking the luminosity channel of the image as the initial image, a low-rank matrix and a sparse matrix were obtained by combining local entropy with Robust Principal Component Analysis (RPCA), exploiting the locality and sparsity of hard exudates in fundus images. Finally, the hard exudate regions were obtained from the normalized sparse matrix. The performance of the proposed method was tested on the fundus image databases e-ophtha EX and DIARETDB1. Experimental results show that the proposed method achieves 91.13% sensitivity and 90% specificity at the lesion level, 99.03% accuracy at the image level, and an average running time of 0.5 s, i.e. higher sensitivity and shorter running time than the Support Vector Machine (SVM) and K-means methods.
Left ventricular segmentation method of ultrasound image based on convolutional neural network
ZHU Kai, FU Zhongliang, CHEN Xiaoqing
Journal of Computer Applications    2019, 39 (7): 2121-2124.   DOI: 10.11772/j.issn.1001-9081.2018112321
Ultrasound segmentation of the left ventricle is very important for doctors in clinical practice. Because ultrasound images contain much noise and the contour features are not obvious, current Convolutional Neural Network (CNN) methods easily include unnecessary regions in left ventricular segmentation, and the segmented regions are incomplete. To solve these problems, keypoint location and an image convex hull method were used to optimize segmentation results based on a Fully Convolutional Network (FCN). Firstly, the FCN was used to obtain preliminary segmentation results. Then, in order to remove erroneous regions from the segmentation results, a CNN was proposed to locate three keypoints of the left ventricle, by which erroneous regions were filtered out. Finally, to ensure that the remaining regions formed a complete ventricle, an image convex hull algorithm was used to merge all effective regions together. Experimental results show that the proposed method greatly improves FCN-based left ventricular segmentation of ultrasound images; under the evaluation criteria, its accuracy is nearly 15% higher than that of the traditional CNN method.
Pulmonary nodule detection algorithm based on deep convolutional neural network
DENG Zhonghao, CHEN Xiaodong
Journal of Computer Applications    2019, 39 (7): 2109-2115.   DOI: 10.11772/j.issn.1001-9081.2019010056
Traditional pulmonary nodule detection algorithms suffer from low detection sensitivity and a large number of false positives. To solve these problems, a pulmonary nodule detection algorithm based on deep Convolutional Neural Networks (CNN) was proposed. Firstly, the traditional fully convolutional segmentation network was purposefully simplified. Then, in order to obtain high-quality candidate pulmonary nodules while keeping sensitivity high, deep supervision of some CNN layers was innovatively added and an improved weighted loss function was used. Thirdly, three-dimensional deep CNNs based on multi-scale contextual information were designed to enhance feature extraction from the images. Finally, a trained fusion classification model was used to classify the candidate nodules so as to reduce the false positive rate. The performance of the algorithm was verified through comparison experiments on the LUNA16 dataset. In the detection stage, with 50.2 candidate nodules detected per CT (Computed Tomography) scan, the sensitivity of the algorithm is 94.3%, which is 4.2 percentage points higher than that of the traditional fully convolutional segmentation network. In the classification stage, the competition performance metric of the algorithm reaches 0.874. Experimental results show that the proposed algorithm can effectively improve detection sensitivity and reduce the false positive rate.
Urban traffic networks collaborative optimization method based on two-layered complex networks
CHEN Xiaoming, LI Yinzhen, SHEN Qiang, JU Yuxiang
Journal of Computer Applications    2019, 39 (10): 3079-3087.   DOI: 10.11772/j.issn.1001-9081.2019030538
To address the problems passengers face in route selection during transfer between the metro and bus layers of an urban transportation network, such as the long distances between some transfer stations, unclear connection orientation, and local imbalance between transfer supply and demand, a collaborative optimization method for urban traffic networks based on two-layered complex networks was presented. Firstly, the logical network topology method was applied to the urban transportation network, and a metro-bus two-layered network model was established with complex network theory. Secondly, taking transfer stations as the research object, a node importance evaluation method based on K-shell decomposition and central weight distribution was presented; this method can divide and identify metro and bus stations in large-scale networks at both coarse and fine granularity. On this basis, a mutually reinforcing collaborative optimization method for the two-layered urban traffic network was presented: complex network techniques for identifying and filtering important nodes in a topology were introduced into the optimization of the two-layered network structure. The two-layered network structure was updated by identifying high-aggregation effects and locating nodes favorable for route selection, so as to optimize the layout and connection of stations in the existing network. Finally, the method was applied to the Chengdu metro-bus network; the existing network structure was optimized to obtain the optimal locations and number of optimized nodes, and the effectiveness of the method was verified by a system of relevant indexes.
The results show that after 32 optimizations the global efficiency of the network is improved, with optimization effects of 15.89% on global efficiency and 16.97% on the average shortest path; passenger transfer behavior increases by 57.44 percentage points; and the impact on accessibility is most obvious at travel costs of 8 000-12 000 m, with an average optimization effect of 23.44%. Meanwhile, by introducing the two-layered network speed ratio and the unit transportation cost, the differences in response and sensitivity of the traffic network to the collaborative optimization process under different operational conditions are highlighted.
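K-shell decomposition, the backbone of the node importance evaluation above, iteratively peels off the lowest-degree nodes; a node's shell index is the k at which it is removed. A minimal pure-Python sketch:

```python
def k_shell(adj):
    """K-shell decomposition of an undirected graph.

    adj: dict node -> set of neighbors (no self-loops).
    Returns dict node -> shell index."""
    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    alive = set(adj)
    shell = {}
    k = 0
    while alive:
        k = max(k, min(degree[v] for v in alive))
        peel = [v for v in alive if degree[v] <= k]   # current k-shell frontier
        while peel:
            v = peel.pop()
            if v not in alive:
                continue
            shell[v] = k
            alive.discard(v)
            for u in adj[v]:                          # peeling may drag neighbors in
                if u in alive:
                    degree[u] -= 1
                    if degree[u] <= k:
                        peel.append(u)
    return shell
```

High-shell stations sit in the densely interconnected core of the metro-bus network, which is why the shell index serves as a coarse importance filter before the finer central-weight ranking.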
Object manipulation system with multiple stereo cameras for logistics applications
ZHANG Zekun, TANG Bing, CHEN Xiaoping
Journal of Computer Applications    2018, 38 (8): 2442-2448.   DOI: 10.11772/j.issn.1001-9081.2018020312
To meet the low-cost and real-time requirements of logistics sorting, a systematic method was proposed to extract complete stereo information of typical objects using multiple stereo cameras. Combining the cameras with a robotic arm and other hardware, a validation and experiment platform was constructed to test the performance of this method. Two Microsoft Kinect cameras were used to measure the locations of objects in the horizontal plane with an accuracy of 3 mm. The stereo features and models of objects were calculated from the complete stereo information at a processing rate of about one frame per second. Using these features, the arm picked 100 objects in succession without failure. The experimental results demonstrate that the proposed method can extract the stereo features of objects of various sizes and shapes in real time without off-line training, and that based on these features the arm can manipulate objects with high accuracy.
Inverse kinematics equation solving method for six degrees of freedom manipulator based on six dimensional linear interpolation
ZHOU Feng, LIN Nan, CHEN Xiaoping
Journal of Computer Applications    2018, 38 (2): 563-567.   DOI: 10.11772/j.issn.1001-9081.2017061494
A six-dimensional linear interpolation theory was proposed to solve the difficult problem of the inverse kinematics of a six-Degree-Of-Freedom (DOF) manipulator with a general structure. Firstly, seven adjacent, non-linearly correlated nodes were searched from a large amount of empirical data to compose a hyper-body. Secondly, these seven nodes were used to obtain a six-dimensional linear predictive function. Finally, the predictive function was used to interpolate and inversely interpolate, predicting poses and joint angles. A Matlab simulation generated one million groups of empirical data from the forward kinematics equation, and the target pose was inversely interpolated iteratively to predict the six joint angles. The experimental results show that, compared with the Radial Basis Function Network (RBFN) and the six-dimensional linear inverse interpolation method, the proposed method approaches the target pose faster and more accurately. The method is data-driven, avoiding complicated theory, and can meet the requirements of everyday robot applications.
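One natural reading of the seven-node predictive function is affine (barycentric) interpolation over a simplex in 6-D pose space: solve for weights that sum to 1 and reproduce the target pose, then apply the same weights to the stored joint angles. This sketch is an assumption about the construction, not the paper's exact formulation:

```python
import numpy as np

def simplex_interpolate(poses, angles, target):
    """Affine interpolation over 7 nodes in 6-D pose space.

    poses:  (7, 6) pose vectors of the nodes (must span 6-D, i.e. form a
            non-degenerate simplex).
    angles: (7, 6) joint-angle vectors corresponding to those poses.
    target: (6,) pose to predict joint angles for.
    Solves for barycentric weights w with sum(w) = 1 and w @ poses = target,
    then returns the weighted joint angles w @ angles."""
    A = np.vstack([poses.T, np.ones(7)])   # 7 equations: 6 pose coords + weight sum
    b = np.append(target, 1.0)
    w = np.linalg.solve(A, b)
    return w @ angles
```

If the true pose-angle relation were exactly affine over the hyper-body, this prediction would be exact; in practice the relation is only locally approximately affine, which is why the paper iterates the inverse interpolation toward the target pose.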
Research and application for terminal location management system based on firmware
SUN Liang, CHEN Xiaochun, ZHENG Shujian, LIU Ying
Journal of Computer Applications    2017, 37 (2): 417-421.   DOI: 10.11772/j.issn.1001-9081.2017.02.0417
Pasting a Radio Frequency Identification (RFID) tag on the shell of a computer to trace its location in real time has been the most common method of terminal location management. However, such a tag loses direct control over the computer once the computer is taken out of the authorized area. Therefore, a terminal location management system based on firmware and RFID was proposed. First of all, the authorized area was delimited by the RFID radio signal, and the computer was allowed to boot only if, at the boot stage, the firmware received the RFID authorization signal via the interaction between the firmware and the RFID tag. Secondly, with the operating system running, the computer could function normally only while it continued to receive the RFID signal. At last, the location management software Agent was protected by the firmware against alteration and deletion. When the computer leaves the RFID signal coverage, this is caught by the terminal's software Agent; the terminal is then locked and its data destroyed. A prototype of the terminal location management system was deployed in an office area to control about thirty computers, so that they could be used normally in authorized areas and were locked immediately once out of them.
Knowledge driven automatic annotating algorithm for game strategies
CHEN Huanhuan, CHEN Xiaohong, RUAN Tong, GAO Daqi, WANG Haofen
Journal of Computer Applications    2017, 37 (1): 278-283.   DOI: 10.11772/j.issn.1001-9081.2017.01.0278
To help users quickly retrieve game strategies of interest, a knowledge-driven automatic annotation algorithm for game strategies was proposed. In the proposed algorithm, a game-domain knowledge base was built automatically by fusing multiple sites that provide information for each game. Game terms in the strategies were extracted by a game-domain vocabulary discovery algorithm and a decision tree classification model. Since most terms appear in the strategies as abbreviations, the game terms were finally linked to the knowledge base to generate full-name semantic tags for them. Experimental results on many games show that the precision of the proposed annotation method reaches 90%. Moreover, the game-domain vocabulary discovery algorithm gives better results than the n-gram language model.
Design of remote wireless monitoring system for smart home based on Internet of things
DENG Yun, LI Chaoqing, CHEN Xiaohui
Journal of Computer Applications    2017, 37 (1): 159-165.   DOI: 10.11772/j.issn.1001-9081.2017.01.0159
Based on the ARM920T-core S3C2440, embedded Web services, QT and wireless networking technology, a smart home monitoring system was designed. The system was composed of a smart home host, a ZigBee/Wi-Fi wireless sensor and control network, and smart home client software. The hardware and software design of the smart home host was completed: the embedded Linux operating system was ported to the ARM platform; embedded Web services were established with the gSOAP tool; the USB-to-serial driver and the Wi-Fi wireless LAN (Local Area Network) driver were configured; the ZigBee wireless sensor and control network was formed; the programs of the coordinator node and the terminal nodes were designed; the data communication protocol was defined; and the client program was designed with QT. Finally, tests of ZigBee network establishment, terminal nodes joining the network, and sensor node data transmission were carried out. The test results show that the sensor nodes in the network can transmit detection information to the coordinator, and that the smart home client software can remotely monitor and control the home environment through the smart home host.
Reference | Related Articles | Metrics
Moving target tracking scheme based on dynamic clustering
BAO Wei, MAO Yingchi, WANG Longbao, CHEN Xiaoli
Journal of Computer Applications    2017, 37 (1): 65-72.   DOI: 10.11772/j.issn.1001-9081.2017.01.0065
Abstract698)      PDF (1185KB)(436)       Save
To address the low tracking accuracy, high energy consumption and short network lifetime of target tracking in Wireless Sensor Network (WSN), a moving target tracking technique based on dynamic clustering was proposed. Firstly, a Two-Ring Dynamic Clustering (TRDC) structure and the corresponding TRDC updating methods were proposed; secondly, based on centroid localization and taking node energy into account, the Centroid Localization based on Power-Level (CLPL) algorithm was proposed; finally, in order to further reduce the energy consumption of the network, the CLPL algorithm was improved into a random localization algorithm. The simulation results indicate that compared with static clustering, the network lifetime is increased by 22.73%; compared with acyclic clustering, the loss rate is decreased by 40.79%; and the accuracy differs little from that of the Received Signal Strength Indicator (RSSI) algorithm. The proposed technique can effectively guarantee tracking accuracy while reducing energy consumption and loss rate.
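The CLPL idea, a centroid weighted by each anchor's remaining power so that low-energy nodes contribute less, can be sketched as follows (a minimal illustration assuming anchors are given as (x, y, power_level) tuples; not the paper's exact formulation):

```python
def clpl_estimate(anchors):
    # Power-level-weighted centroid: anchors with more remaining energy
    # pull the position estimate more strongly toward themselves.
    total_power = sum(p for _, _, p in anchors)
    x = sum(ax * p for ax, _, p in anchors) / total_power
    y = sum(ay * p for _, ay, p in anchors) / total_power
    return x, y
```

With equal power levels this reduces to plain centroid localization, which is the baseline the paper starts from.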
Reference | Related Articles | Metrics
Enhanced multi-species-based particle swarm optimization for multi-modal function
XIE Hongxia, MA Xiaowei, CHEN Xiaoxiao, XING Qiang
Journal of Computer Applications    2016, 36 (9): 2516-2520.   DOI: 10.11772/j.issn.1001-9081.2016.09.2516
Abstract1165)      PDF (769KB)(380)       Save
It is difficult to balance local exploitation and global exploration in multi-modal function optimization; therefore, an Enhanced Multi-Species-based Particle Swarm Optimization (EMSPSO) was proposed. An improved multi-species evolution strategy was introduced into Species-based Particle Swarm Optimization (SPSO): several species that evolved independently were established by selecting seeds among the individual optima, which improved the stability of algorithm convergence. A redundant particle reinitialization strategy was introduced to improve the utilization of particles and to enhance the global search capability and search efficiency of the algorithm. Meanwhile, in order to avoid missing optimal extreme points during optimization, the velocity update formula was also improved to effectively balance the local exploitation and global exploration capabilities of the algorithm. Finally, six typical test functions were selected to test the performance of EMSPSO. The experimental results show that EMSPSO achieves a high success rate in multi-modal optimization and excellent global extremum search performance.
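In species-based PSO variants of this kind, each species seed typically plays the role of the global best within its species. A generic per-dimension velocity update along those lines can be sketched as follows (the coefficients and form are illustrative, not the paper's improved formula):

```python
import random

def update_velocity(v, x, pbest, seed, w=0.7, c1=1.5, c2=1.5):
    # Inertia term plus stochastic attraction toward the particle's own
    # best position (pbest) and toward its species seed.
    return [w * vi
            + c1 * random.random() * (pb - xi)
            + c2 * random.random() * (sd - xi)
            for vi, xi, pb, sd in zip(v, x, pbest, seed)]
```

When a particle already sits at both attractors, only the inertia term remains, which is what lets the improved update keep exploring instead of collapsing early.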
Reference | Related Articles | Metrics
Extraction algorithm of multi-view matching points using union find
LU Jun, ZHANG Baoming, GUO Haitao, CHEN Xiaowei
Journal of Computer Applications    2016, 36 (6): 1659-1663.   DOI: 10.11772/j.issn.1001-9081.2016.06.1659
Abstract311)      PDF (888KB)(315)       Save
The extraction of multi-view matching points is one of the key problems in 3D reconstruction of multi-view image scenes, and the extraction result directly affects the accuracy of the reconstruction. The extraction problem of multi-view matching points was converted into a dynamic connectivity problem, and a Union-Find (UF) method was designed. The nodes of the UF structure were organized in an efficient tree using parent-link connections. When matching points were added, only the addressing parameters of a single node needed to be modified, which avoided recomputing addressing parameters by traversing the whole array and improved the efficiency of locating and updating. A weighting strategy was used to optimize the algorithm: weighted encoding replaced the conventional hard encoding, which balanced the tree structure and reduced its average depth. The experimental results on multiple image sets show that the proposed UF-based algorithm can extract more multi-view matching points and is more efficient than the conventional Breadth-First-Search (BFS) algorithm.
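The parent-link structure with weighted union described above is essentially the classic weighted quick-union; a compact sketch (pairwise feature matches are merged so that each connected component becomes one multi-view matching point):

```python
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))   # parent-link tree
        self.size = [1] * n            # component sizes for weighted union

    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path halving
            i = self.parent[i]
        return i

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:   # attach smaller tree under larger
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
```

For example, pairwise matches (0, 1) and (1, 2) across three views collapse features 0, 1 and 2 into a single component, i.e. one multi-view matching point.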
Reference | Related Articles | Metrics
Optimization of extreme learning machine parameters by adaptive chaotic particle swarm optimization algorithm
CHEN Xiaoqing, LU Huijuan, ZHENG Wenbin, YAN Ke
Journal of Computer Applications    2016, 36 (11): 3123-3126.   DOI: 10.11772/j.issn.1001-9081.2016.11.3123
Abstract680)      PDF (595KB)(584)       Save
Since Extreme Learning Machine (ELM) does not handle non-linear data well and its random parameter initialization is not conducive to model generalization, an improved ELM algorithm was proposed. The parameters of ELM were optimized by the Adaptive Chaotic Particle Swarm Optimization (ACPSO) algorithm to increase the stability of the algorithm and to improve the accuracy of ELM for gene expression data classification. Simulation experiments were carried out on UCI gene data. The results show that Adaptive Chaotic Particle Swarm Optimization-Extreme Learning Machine (ACPSO-ELM) has good stability and reliability, and effectively improves the accuracy of gene classification over existing algorithms such as Detecting Particle Swarm Optimization-Extreme Learning Machine (DPSO-ELM) and Particle Swarm Optimization-Extreme Learning Machine (PSO-ELM).
Reference | Related Articles | Metrics
Quantized distributed Kalman filtering based on dynamic weighting
CHEN Xiaolong, MA Lei, ZHANG Wenxu
Journal of Computer Applications    2015, 35 (7): 1824-1828.   DOI: 10.11772/j.issn.1001-9081.2015.07.1824
Abstract704)      PDF (766KB)(613)       Save
Focusing on the state estimation problem of a Wireless Sensor Network (WSN) without a fusion center, a Quantized Distributed Kalman Filtering (QDKF) algorithm was proposed. Firstly, based on a weighting criterion of node estimation accuracy, a weight matrix was dynamically chosen in the Distributed Kalman Filtering (DKF) algorithm to minimize the global estimation Error Covariance Matrix (ECM). Then, considering the bandwidth constraint of the network, a uniform quantizer was added to the DKF algorithm; exchanging quantized information during communication reduced the required network bandwidth. Simulations were conducted with the proposed QDKF algorithm using an 8-bit quantizer. In comparison experiments with the Metropolis weighting and the maximum degree weighting, the estimation Root Mean Square Error (RMSE) of the proposed dynamic weighting method decreased by 25% and 27.33% respectively. The simulation results show that the QDKF algorithm with dynamic weighting can improve estimation accuracy while reducing the required network bandwidth, making it suitable for applications with limited network communication.
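The uniform quantizer added to the DKF exchange can be sketched as a generic mid-rise quantizer over a known signal range (the 8-bit setting matches the experiment, but the range handling and encoding here are assumptions, not the paper's exact design):

```python
def uniform_quantize(value, lo, hi, bits=8):
    # Map value in [lo, hi] to one of 2**bits uniformly spaced levels and
    # return the reconstruction (level midpoint). In the network, only the
    # integer level index would actually be transmitted between nodes.
    levels = 2 ** bits
    step = (hi - lo) / levels
    index = max(0, min(int((value - lo) / step), levels - 1))
    return lo + (index + 0.5) * step
```

With 8 bits over [-1, 1] the step is 2/256, so the worst-case quantization error of an in-range value is at most half a step, which is what bounds the accuracy loss relative to unquantized DKF.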
Reference | Related Articles | Metrics
Sensor network queue management algorithm based on duty cycle control and delay guarantee
ZENG Zhendong, CHEN Xiao, SUN bo, WU Shuxin
Journal of Computer Applications    2015, 35 (5): 1242-1245.   DOI: 10.11772/j.issn.1001-9081.2015.05.1242
Abstract392)      PDF (775KB)(522)       Save

In order to meet the delay requirements of a Wireless Sensor Network (WSN) while minimizing power consumption, a sensor network queue management algorithm based on duty cycle control and delay guarantee (DQC) was proposed. A two-way controller was used to adjust node duty cycles and queue thresholds under changing network conditions. The controller provided a delay notification mechanism to determine an appropriate sleep time and queue length for each node based on application requirements and time-varying delay requirements. The stability of the two-way controller was analyzed based on control theory, yielding a condition on the control parameters that guarantees an asymptotically stable steady state. Simulation results show that, compared with the algorithms based on adaptive duty cycle control and on queue-based congestion management for performance improvement, the proposed algorithm shortens the end-to-end delay by 38.8% and 36.0%, and reduces the average power consumption by 46.5 mW and 27.5 mW respectively. It shows better performance in both delay control and energy efficiency.
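One side of such a two-way controller, adapting a node's sleep time to its queue occupancy, might look like the following proportional rule (purely illustrative: the gain, bounds and control law are assumptions, not the controller derived in the paper):

```python
def adjust_sleep_time(sleep, queue_len, queue_ref, gain=0.05,
                      sleep_min=0.01, sleep_max=1.0):
    # When the queue exceeds its reference level, the node sleeps less
    # (wakes more often to drain packets and cut delay); when the queue
    # is short, it sleeps more to save energy. Clamping keeps the duty
    # cycle within physically meaningful bounds.
    updated = sleep * (1.0 - gain * (queue_len - queue_ref))
    return max(sleep_min, min(sleep_max, updated))
```

The stability condition mentioned in the abstract would constrain the gain so that this feedback loop settles rather than oscillates.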

Reference | Related Articles | Metrics
Echocardiography chamber segmentation based on integration of speeded up robust feature fitting and Chan-Vese model
CHEN Xiaolong, WANG Xiaodong, LI Xin, YE Jianyu, YAO Yu
Journal of Computer Applications    2015, 35 (4): 1124-1128.   DOI: 10.11772/j.issn.1001-9081.2015.04.1124
Abstract397)      PDF (757KB)(549)       Save

During the automatic segmentation of cardiac structures in echocardiographic sequences within a cardiac cycle, contours with weak edges cannot be extracted effectively. A new approach combining Speeded Up Robust Feature (SURF) and the Chan-Vese model was proposed to resolve this problem. Firstly, the weak boundary of the heart chamber in the first frame was marked manually. Then, the SURF points around the boundary were extracted to build a Delaunay triangulation, and the positions of weak boundaries in subsequent frames were predicted by matching feature points between adjacent frames. The coarse contour was extracted using the Chan-Vese model, and the fine contour of the object was then acquired by a region growing algorithm. The experimental results show that the proposed algorithm can effectively extract the contour of a heart chamber with weak edges, and the result is close to that of manual segmentation.

Reference | Related Articles | Metrics
Social network model based on micro-blog transmission
CHEN Xiao, HUANG Shuguang, QIN Li
Journal of Computer Applications    2015, 35 (3): 638-642.   DOI: 10.11772/j.issn.1001-9081.2015.03.638
Abstract977)      PDF (706KB)(681)       Save

Studying the construction mechanism of the micro-blog transmission network helps to understand the information spreading process on micro-blog platforms deeply, and thus to obtain effective strategies and suggestions. To this end, a directed and weighted network model was proposed. In the model building process, triad formation was introduced according to the phenomenon that a micro-blog can be transmitted more than once. Different link directions were used to represent the characteristics of active and famous users, and the dynamic evolution of link weights was also considered. The theoretical analysis and simulation results indicate that the strength distribution, the degree distribution and the correlation of strength and degree obey power-law distributions with exponents between 1 and 3. The model is also characterized by a high clustering coefficient and a short average path length: the average clustering coefficient is 0.7, and the average path length is less than 6. Actual micro-blog transmission data were collected to verify the model's correctness.

Reference | Related Articles | Metrics
Compressive wideband spectrum blind detection based on high-order statistics
CAO Kaitian, CHEN Xiaosi, ZHU Wenjun
Journal of Computer Applications    2015, 35 (11): 3261-3264.   DOI: 10.11772/j.issn.1001-9081.2015.11.3261
Abstract489)      PDF (803KB)(428)       Save
In cognitive radio networks, wideband spectrum sensing is restricted by the need for high-speed Analog-to-Digital Converters (ADC). To cope with this issue, the probability distribution of the high-order decision statistic for wideband spectrum sensing fed by compressed observations was deduced based on Compressive Sampling (CS) theory, and then a High-Order Statistics (HOS)-based Compressive Wideband Spectrum Blind Detection (HOS-CWSBD) scheme using these compressive measurements was proposed. The proposed scheme requires neither prior knowledge of the transmitted signal nor signal recovery. Both theoretical analysis and simulation results show that, compared with traditional CS-based spectrum sensing schemes requiring signal recovery and with HOS-based spectrum sensing on Nyquist samples, the proposed scheme has lower computational complexity and greater robustness to noise uncertainty.
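High-order-statistics detectors commonly threshold a fourth-order quantity such as excess kurtosis, which is near zero for Gaussian noise and deviates for most modulated signals. A generic sketch of such a decision statistic (not the exact statistic derived in the paper, which operates on compressive measurements):

```python
def excess_kurtosis(samples):
    # Fourth-order decision statistic: m4 / m2**2 - 3 is approximately 0
    # for Gaussian noise, so a large magnitude suggests a signal is
    # present in the sensed band.
    n = len(samples)
    mean = sum(samples) / n
    m2 = sum((s - mean) ** 2 for s in samples) / n
    m4 = sum((s - mean) ** 4 for s in samples) / n
    return m4 / (m2 * m2) - 3.0
```

Because the statistic is scale-invariant, it does not depend on the exact noise power, which is the source of the robustness to noise uncertainty mentioned above.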
Reference | Related Articles | Metrics
Analysis of public emotion evolution based on probabilistic latent semantic analysis
LIN Jianghao, ZHOU Yongmei, YANG Aimin, CHEN Yuhong, CHEN Xiaofan
Journal of Computer Applications    2015, 35 (10): 2747-2751.   DOI: 10.11772/j.issn.1001-9081.2015.10.2747
Abstract345)      PDF (900KB)(488)       Save
Concerning the problem of topic mining and the corresponding public emotion analysis, an analytical method for public emotion evolution was proposed based on the Probabilistic Latent Semantic Analysis (PLSA) model. In order to find the evolutionary patterns of topics, the method started with extracting subtopics on a time series by making use of the PLSA model. Then, emotion feature vectors, represented by emotion units and weights matched with the topic context, were established via syntactic parsing and an ontology lexicon. Next, the strength of public emotion was computed at a fine-grained dimension as well as for the holistic public emotion of the issue. In this way, the method mined the evolutionary patterns of public emotion in depth and finally quantified and visualized them. The advantage of the method lies in introducing grammatical rules and an ontology lexicon in the process of extracting emotion units, which is conducted at a fine-grained dimension to improve extraction accuracy. The experimental results show that this method performs well in the evolutionary analysis of topics and public emotion on time series, which demonstrates its effectiveness.
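The fine-grained strength computation can be pictured as a weight-normalized aggregation of emotion units (a toy sketch with assumed (polarity, weight) pairs; the paper's actual weighting scheme, tied to the topic context, is richer):

```python
def emotion_strength(units):
    # units: (polarity, weight) pairs extracted for one subtopic, where
    # polarity is a signed score (e.g. +1 positive, -1 negative) and
    # weight reflects how well the unit matches the topic context.
    total_weight = sum(w for _, w in units)
    return sum(p * w for p, w in units) / total_weight
```

Computing this per subtopic and per time slice yields the strength series that the method quantifies and visualizes as the emotion evolution.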
Reference | Related Articles | Metrics
Optimal beacon nodes-based centroid localization algorithm for wireless sensor network
CHEN Xiaohai, PENG Jian, LIU Tang
Journal of Computer Applications    2015, 35 (1): 5-9.   DOI: 10.11772/j.issn.1001-9081.2015.01.0005
Abstract701)      PDF (854KB)(588)       Save

To improve the accuracy of the Centroid Localization (CL) algorithm in Wireless Sensor Network (WSN), an Optimal Beacon nodes-based Centroid Localization (OBCL) algorithm was proposed, in which four mobile beacon nodes were used. First, the path of each mobile beacon node was planned. Second, each unknown node selected the optimal beacon nodes from the candidate beacon nodes according to the Set Deviation Degree (SDD) to estimate its location. Besides, a role-change mechanism was adopted to cope with the shortage of beacon nodes: once an unknown node obtained its estimated location, it could act as an expectant beacon node to assist other unknown nodes. At last, to ensure that every unknown node obtained a location, a relocation procedure was executed after the initial localization. The simulation results show that the average localization error is reduced by 67.7%, 39.2% and 24.4% compared with the CL, WCL (Weighted Centroid Localization) and RR-WCL (Weighted Centroid Localization based on Received signal strength indication Ratio) algorithms respectively. Because OBCL achieves better localization with only four mobile beacon nodes, it is suitable for scenarios requiring low network cost and high localization accuracy.
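The baseline CL algorithm that OBCL refines simply averages the positions of the beacons an unknown node can hear. A minimal simulation-style sketch (the helper name and the circular radio-range assumption are illustrative; OBCL additionally filters the heard beacons by SDD):

```python
import math

def centroid_localize(node_xy, radio_range, beacons):
    # Plain centroid localization: the unknown node's estimate is the
    # unweighted centroid of all beacon positions within radio range.
    # node_xy is the true position, used here only to simulate hearing.
    heard = [(bx, by) for bx, by in beacons
             if math.hypot(bx - node_xy[0], by - node_xy[1]) <= radio_range]
    return (sum(x for x, _ in heard) / len(heard),
            sum(y for _, y in heard) / len(heard))
```

OBCL's improvement is in choosing *which* beacon positions enter this average, rather than in the averaging itself.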

Reference | Related Articles | Metrics